
    Turing machines can be efficiently simulated by the General Purpose Analog Computer

    The Church-Turing thesis states that any sufficiently powerful computational model which captures the notion of algorithm is computationally equivalent to the Turing machine. This equivalence usually holds both at a computability level and at a computational complexity level, modulo polynomial reductions. However, the situation is less clear for models of computation using real numbers, and no analog of the Church-Turing thesis exists for this case. Recently it was shown that some models of computation with real numbers are equivalent from a computability perspective. In particular, it was shown that Shannon's General Purpose Analog Computer (GPAC) is equivalent to Computable Analysis. However, little is known about what happens at the computational complexity level. In this paper we shed some light on the connections between these two models at the computational complexity level, by showing that, modulo polynomial reductions, computations of Turing machines can be simulated by GPACs without using more (space) resources than the original Turing computation, as long as we restrict ourselves to bounded computations. In other words, computations done by the GPAC are as space-efficient as computations done in the context of Computable Analysis.
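
    A minimal sketch of the kind of object the abstract talks about (not the paper's simulation of Turing machines): GPAC-generable functions coincide with solutions of polynomial initial-value problems (PIVPs). The PIVP below, with a toy fixed-step RK4 integrator standing in for the analog device, generates exp, sin and cos; all names and step sizes are illustrative choices.

```python
# Illustrative PIVP: all right-hand sides are polynomials in the state.
def pivp_rhs(y):
    y1, y2, y3 = y
    return (y1,    # y1' = y1   -> y1(t) = exp(t)
            y3,    # y2' = y3   -> y2(t) = sin(t)
            -y2)   # y3' = -y2  -> y3(t) = cos(t)

def rk4(rhs, y0, t_end, h=1e-3):
    # Classical fixed-step Runge-Kutta 4 integration of y' = rhs(y).
    y, t = list(y0), 0.0
    while t < t_end:
        k1 = rhs(y)
        k2 = rhs([yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = rhs([yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = rhs([yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

print(rk4(pivp_rhs, (1.0, 0.0, 1.0), 1.0))  # ~ [e, sin 1, cos 1]
```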

    An Analogue-Digital Model of Computation: Turing Machines with Physical Oracles

    We introduce an abstract analogue-digital model of computation that couples Turing machines to oracles that are physical processes. Since any oracle has the potential to boost the computational power of a Turing machine, the effect on the power of the Turing machine of adding a physical process raises interesting questions. Do physical processes add significantly to the power of Turing machines; can they break the Turing Barrier? Does the power of the Turing machine vary with different physical processes? Specifically, here, we take a physical oracle to be a physical experiment, controlled by the Turing machine, that measures some physical quantity. There are three protocols of communication between the Turing machine and the oracle that simulate the types of error propagation common to analogue-digital devices, namely: infinite precision, unbounded precision, and fixed precision. These three types of precision introduce three variants of the physical oracle model. On fixing one archetypal experiment, we show how to classify the computational power of the three models by establishing lower and upper bounds. Using new techniques and ideas about timing, we give a complete classification.
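
    A toy sketch of the three precision protocols, assuming (hypothetically) that the archetypal experiment answers threshold queries "is the measured quantity below q?"; the value MU and the error model are illustrative, not the paper's formal definitions.

```python
import random

MU = 0.6374291  # hypothetical value of the measured physical quantity, in (0, 1)

def infinite_precision(q):
    # The experiment answers exactly, with no error.
    return MU < q

def unbounded_precision(q, eps):
    # The experiment can be run to any requested precision eps > 0: it answers
    # correctly when |MU - q| >= eps; otherwise the outcome is ambiguous
    # (modelled here by a coin flip).
    if abs(MU - q) >= eps:
        return MU < q
    return random.random() < 0.5

def fixed_precision(q, eps=1e-3):
    # The precision is fixed once and for all by the device.
    return unbounded_precision(q, eps)

# Bisection against the oracle: with infinite precision this extracts MU to any
# number of bits; with fixed precision only about log2(1/eps) bits are reliable.
lo, hi = 0.0, 1.0
for _ in range(20):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if infinite_precision(mid) else (mid, hi)
print(lo, hi)
```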

    Thermodynamic graph-rewriting

    We develop a new thermodynamic approach to stochastic graph-rewriting. The ingredients are a finite set of reversible graph-rewriting rules called generating rules, a finite set of connected graphs P called energy patterns and an energy cost function. The idea is that the generators define the qualitative dynamics, by showing which transformations are possible, while the energy patterns and cost function specify the long-term probability π of any reachable graph. Given the generators and energy patterns, we construct a finite set of rules which (i) has the same qualitative transition system as the generators; and (ii) when equipped with suitable rates, defines a continuous-time Markov chain of which π is the unique fixed point. The construction relies on the use of site graphs and a technique of "growth policy" for quantitative rule refinement which is of independent interest. This division of labour between the qualitative and long-term quantitative aspects of the dynamics leads to intuitive and concise descriptions for realistic models (see the examples in S4 and S5). It also guarantees thermodynamical consistency (a.k.a. detailed balance), otherwise known to be undecidable, which is important for some applications. Finally, it leads to parsimonious parameterizations of models, again an important point in some applications.
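
    A toy numerical check of the thermodynamic idea behind the construction (not the paper's site-graph machinery): if each reversible rule x <-> y is given rates with ratio exp(E(x) - E(y)), then the distribution π(x) proportional to exp(-E(x)) satisfies detailed balance for the resulting continuous-time Markov chain. The state space, energies and rate splitting below are hypothetical.

```python
import math

E = {"A": 0.0, "B": 1.3, "C": 0.4}            # hypothetical energy of each graph
moves = [("A", "B"), ("B", "C"), ("A", "C")]  # reversible generating rules

def rate(x, y):
    # One admissible choice: split the energy difference symmetrically,
    # so that rate(x, y) / rate(y, x) = exp(E(x) - E(y)).
    return math.exp((E[x] - E[y]) / 2.0)

Z = sum(math.exp(-e) for e in E.values())
pi = {x: math.exp(-E[x]) / Z for x in E}      # candidate stationary distribution

for x, y in moves:
    lhs, rhs = pi[x] * rate(x, y), pi[y] * rate(y, x)
    print(x, "<->", y, "detailed balance:", abs(lhs - rhs) < 1e-12)
```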

    Global Versus Local Computations: Fast Computing with Identifiers

    This paper studies what can be computed in polylogarithmic parallel time by probabilistic local interactions between agents with very restricted power. It is known that if agents are only finite state (corresponding to the Population Protocol model of Angluin et al.), then only semilinear predicates over the global input can be computed. In fact, if the population starts with a unique leader, these predicates can even be computed in polylogarithmic parallel time. If identifiers are added (corresponding to the Community Protocol model of Guerraoui and Ruppert), then more global predicates over the input multiset can be computed. Local predicates over the input sorted according to the identifiers can also be computed, as long as the identifiers are ordered. Computing some of those predicates, however, might require exponential parallel time. In this paper, we consider what can be computed with Community Protocols in a polylogarithmic number of parallel interactions. We introduce the class CPPL, corresponding to protocols that use O(n log^k n) expected interactions, for some k, to compute their predicates, or equivalently a polylogarithmic expected number of parallel interactions. We provide some computable protocols and some boundaries of the class, using the fact that the population can compute its size. We also prove two impossibility results showing that local computations are no longer easy: the population does not have the time to compare a linear number of consecutive identifiers, and Linearly Local languages, such as the rational language (ab)*, are not computable. Comment: Long version of SSS 2016 publication, appended version of SIROCCO 201
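
    A small experiment (not one of the paper's protocols) illustrating the time measure the class CPPL is defined with: parallel time is the number of pairwise interactions divided by n. A one-way epidemic, where a single informed agent spreads its knowledge, takes roughly 2 n ln n interactions, i.e. logarithmic parallel time; population size, seeds and counts below are illustrative.

```python
import random

def epidemic_parallel_time(n, rng=random.Random(0)):
    # One agent starts informed; an interaction informs the responder
    # whenever the initiator is already informed.
    informed = [False] * n
    informed[0] = True
    count, interactions = 1, 0
    while count < n:
        a, b = rng.randrange(n), rng.randrange(n)
        if a == b:
            continue
        interactions += 1
        if informed[a] and not informed[b]:
            informed[b] = True
            count += 1
    return interactions / n   # parallel time, expected to grow like 2*ln(n)

for n in (100, 1000, 10000):
    print(n, round(epidemic_parallel_time(n), 1))
```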

    On the Complexity of Quadratization for Polynomial Differential Equations

    Chemical reaction networks (CRNs) are a standard formalism used in chemistry and biology to reason about the dynamics of molecular interaction networks. In their interpretation by ordinary differential equations, CRNs provide a Turing-complete model of analog computation, in the sense that any computable function over the reals can be computed by a finite number of molecular species with a continuous CRN which approximates the result of that function in one of its components with arbitrary precision. The proof of that result is based on a previous result of Bournez et al. on the Turing-completeness of polynomial ordinary differential equations with polynomial initial conditions (PIVP). It uses an encoding of real variables by two non-negative variables for concentrations, and a transformation to an equivalent quadratic PIVP (i.e., with degrees at most 2) in order to restrict to at most bimolecular reactions. In this paper, we study the theoretical and practical complexities of the quadratic transformation. We show that both problems of minimizing either the number of variables (i.e., molecular species) or the number of monomials (i.e., elementary reactions) in a quadratic transformation of a PIVP are NP-hard. We present an encoding of those problems in MAX-SAT and show the practical complexity of this approach on a benchmark of quadratization problems inspired by CRN design problems.
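
    A worked toy quadratization, for illustration only (the paper is about optimizing such transformations via MAX-SAT, not this example): the cubic PIVP y' = y^3 becomes quadratic by introducing z = y^2, since y' = y*z and z' = 2*y*y' = 2*y^2*z = 2*z^2. A crude Euler integration checks that the two systems agree; step sizes and the initial value are arbitrary choices.

```python
def euler_cubic(y0, h, steps):
    # Direct integration of y' = y**3 (degree 3).
    y = y0
    for _ in range(steps):
        y += h * y ** 3
    return y

def euler_quadratic(y0, h, steps):
    # Quadratized system: y' = y*z, z' = 2*z**2 with z(0) = y0**2.
    # All right-hand sides have degree <= 2, hence at most bimolecular reactions.
    y, z = y0, y0 ** 2
    for _ in range(steps):
        y, z = y + h * (y * z), z + h * (2 * z * z)
    return y

print(euler_cubic(0.5, 1e-4, 10000), euler_quadratic(0.5, 1e-4, 10000))  # both ~0.707
```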

    Deciding Reachability for Piecewise Constant Derivative Systems on Orientable Manifolds

    A hybrid automaton is a finite state machine combined with some k real-valued continuous variables, where k is the dimension of the automaton. This formalism is widely used for modelling safety-critical systems, and verification tasks for such systems can often be expressed as the reachability problem for hybrid automata. Asarin, Mysore, Pnueli and Schneider defined classes of hybrid automata lying on the boundary between decidability and undecidability in their seminal paper 'Low dimensional hybrid systems - decidable, undecidable, don't know' [9]. They proved that certain decidable classes become undecidable when given a little additional computational power, and showed that the reachability question remains unsolved for some 2-dimensional systems. Piecewise Constant Derivative Systems on 2-dimensional manifolds (or PCD2m) constitute a class of hybrid automata for which decidability of the reachability problem is unknown. In this paper we show that the reachability problem becomes decidable for PCD2m if we slightly limit their dynamics, and thus we partially answer the open question of Asarin, Mysore, Pnueli and Schneider posed in [9].
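
    A toy planar PCD to make the object concrete (this is only bounded-time simulation, not the paper's decision procedure, which must also handle unbounded time and limit behaviour): the plane is split into four quadrants, each carrying a constant derivative vector, so trajectories are broken lines; the slopes and target box below are hypothetical.

```python
SLOPES = {                      # hypothetical constant derivative per quadrant
    (+1, +1): (-1.0, 0.5),
    (-1, +1): (-0.5, -1.0),
    (-1, -1): (1.0, -0.5),
    (+1, -1): (0.5, 1.0),
}

def region(x, y):
    # Which quadrant the point lies in (boundaries attached arbitrarily).
    return (1 if x >= 0 else -1, 1 if y >= 0 else -1)

def reaches(start, target_box, t_max=50.0, h=1e-3):
    # Crude time-stepping: follow the piecewise-constant flow and report
    # whether the target box is entered before the time bound.
    (x, y), ((xl, xu), (yl, yu)) = start, target_box
    t = 0.0
    while t < t_max:
        if xl <= x <= xu and yl <= y <= yu:
            return True
        dx, dy = SLOPES[region(x, y)]
        x, y, t = x + h * dx, y + h * dy, t + h
    return False

# This particular PCD spirals inwards, so the origin box is reached.
print(reaches((2.0, 2.0), ((-0.1, 0.1), (-0.1, 0.1))))
```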

    Towards an Axiomatization of Simple Analog Algorithms

    We propose a formalization of analog algorithms, extending the framework of abstract state machines to continuous-time models of computation.

    A Survey on Continuous Time Computations

    We provide an overview of theories of continuous time computation. These theories allow us to understand both the hardness of questions related to continuous time dynamical systems and the computational power of continuous time analog models. We survey the existing models, summarize results, and point to relevant references in the literature.

    Confluence and Convergence in Probabilistically Terminating Reduction Systems

    Convergence of an abstract reduction system (ARS) is the property that any derivation from an initial state will end in the same final state, a.k.a. normal form. We generalize this to probabilistic ARSs as almost-sure convergence, meaning that the normal form is reached with probability one, even if diverging derivations may exist. We show and exemplify properties that can be used for proving almost-sure convergence of probabilistic ARSs, generalizing known results from ARSs. Comment: Pre-proceedings paper presented at the 27th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur, Belgium, 10-12 October 2017 (arXiv:1708.07854)
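
    A toy probabilistic ARS, purely illustrative and not taken from the paper: the state is a natural number n; for n > 0, n rewrites to n-1 with probability 0.6 and to n+1 with probability 0.4, and n = 0 is the unique normal form. Derivations of unbounded length exist, yet the normal form is reached with probability one, which is exactly the almost-sure convergence the abstract describes; the simulation below only suggests this empirically.

```python
import random

def run(n=5, max_steps=10_000, rng=random.Random(1)):
    # Follow one random derivation; report whether the normal form 0 was reached.
    steps = 0
    while n > 0 and steps < max_steps:
        n = n - 1 if rng.random() < 0.6 else n + 1
        steps += 1
    return n == 0

trials = 10_000
print(sum(run() for _ in range(trials)) / trials)   # close to 1.0
```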

    Non-polynomial Worst-Case Analysis of Recursive Programs

    We study the problem of developing efficient approaches for proving worst-case bounds of non-deterministic recursive programs. Ranking functions are sound and complete for proving termination and worst-case bounds of non-recursive programs. First, we apply ranking functions to recursion, resulting in measure functions. We show that measure functions provide a sound and complete approach to prove worst-case bounds of non-deterministic recursive programs. Our second contribution is the synthesis of measure functions in non-polynomial forms. We show that non-polynomial measure functions with logarithm and exponentiation can be synthesized through abstraction of logarithmic or exponentiation terms, Farkas' Lemma, and Handelman's Theorem, using linear programming. While previous methods obtain worst-case polynomial bounds, our approach can synthesize bounds of the form O(n log n) as well as O(n^r) where r is not an integer. We present experimental results to demonstrate that our approach can efficiently obtain worst-case bounds of classical recursive algorithms such as (i) Merge-Sort and the divide-and-conquer algorithm for the Closest-Pair problem, where we obtain O(n log n) worst-case bounds, and (ii) Karatsuba's algorithm for polynomial multiplication and Strassen's algorithm for matrix multiplication, where we obtain O(n^r) bounds such that r is not an integer and close to the best-known bounds for the respective algorithms. Comment: 54 Pages, Full Version to CAV 201
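
    A sketch of what a measure function certifies, under assumptions of my own (the paper synthesizes such functions automatically via Farkas' Lemma and Handelman's Theorem; here a single candidate is merely checked by brute force): for Merge-Sort's recurrence T(n) <= T(ceil(n/2)) + T(floor(n/2)) + n, any f with f(n) >= f(ceil(n/2)) + f(floor(n/2)) + n witnesses T(n) = O(f(n)). The candidate f(n) = c*n*log2(n) and the constant are illustrative.

```python
import math

C = 1.2  # hypothetical constant; any c making the inequality hold works

def f(n):
    # Candidate non-polynomial measure function for Merge-Sort.
    return 0.0 if n <= 1 else C * n * math.log2(n)

ok = all(f(n) >= f((n + 1) // 2) + f(n // 2) + n for n in range(2, 100_000))
print("f is a valid measure on the tested range:", ok)   # expect True
```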